The miracle of combining forecasts

Life gifts us very few miracles. So when a miracle happens, we must be prepared to embrace it, and appreciate its worth.
[Image: dogs in the snow during Winter Storm Pax]

In 1947, in New York City, there was the Miracle on 34th Street.

In 1980, at the Winter Olympics, there was the Miracle on Ice.

In 1992, at the Academy Awards, there was the miracle of Marisa Tomei winning the Best Supporting Actress Oscar.

And in 2014, on Wednesday afternoon this week, there was the miracle of getting off the SAS campus in the middle of Winter Storm Pax.

There are also those "officially recognized" miracles that can land a person in sainthood. These frequently involve images burned into pancakes or grown into fruits and vegetables (e.g. the Richard Nixon eggplant). While I have little chance of becoming a saint, I have witnessed a miracle in the realm of business forecasting: the miracle of combining forecasts.

A Miracle of Business Forecasting

Last week's installment of The BFD highlighted an interview with Greg Fishel, Chief Meteorologist at WRAL, on the topic of combined or "ensemble" models in weather forecasting. In this application, multiple perturbations of initial conditions (minor changes to temperature, humidity, etc.) are fed through the same forecasting model. If the perturbations deliver wildly different results, this indicates a high level of uncertainty in the forecast. If they deliver very similar results, the weather scientists have good reason for confidence in the forecast.
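
Here is a minimal sketch of that perturbation idea (the model is an invented toy stand-in, not anything WRAL actually runs): jitter the initial conditions slightly, rerun the same model, and treat the spread of the outputs as a rough measure of uncertainty.

```python
import random
import statistics

def toy_model(temperature, humidity):
    """Stand-in for a weather model; the relationship is invented."""
    return 0.8 * temperature + 0.1 * humidity

base_temperature, base_humidity = 35.0, 80.0   # observed initial conditions

# Run the same model many times with slightly perturbed inputs
ensemble = []
for _ in range(50):
    t = base_temperature + random.gauss(0, 0.5)
    h = base_humidity + random.gauss(0, 2.0)
    ensemble.append(toy_model(t, h))

# A small spread suggests confidence; a large spread signals high uncertainty
print(f"Ensemble mean: {statistics.mean(ensemble):.1f}")
print(f"Ensemble spread (std dev): {statistics.stdev(ensemble):.2f}")
```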

In Fishel's weather forecasting example, they create the ensemble forecast by passing multiple variations of the input data through the same forecasting model. This is different from typical business forecasting, where we feed the same initial conditions (e.g. a time series of historical sales) into multiple models. We then take a composite (e.g. an average) of the resulting forecasts, and that becomes our combined or ensemble forecast.
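
A minimal sketch of that business-forecasting version, with made-up sales figures and three deliberately simple models, might look like this:

```python
# Illustrative sales history (invented numbers)
sales = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]

def naive_forecast(history):
    return history[-1]                     # repeat the last observation

def mean_forecast(history):
    return sum(history) / len(history)     # long-run average

def drift_forecast(history):
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + slope             # extend the average trend one period

# Feed the same history into each model...
forecasts = [f(sales) for f in (naive_forecast, mean_forecast, drift_forecast)]

# ...then take a simple average as the combined (ensemble) forecast
combined = sum(forecasts) / len(forecasts)
print(f"Individual forecasts: {[round(f, 1) for f in forecasts]}")
print(f"Combined forecast: {combined:.1f}")
```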

In 2001, J. Scott Armstrong published a valuable summary of the literature in "Combining Forecasts" in his Principles of Forecasting. Armstrong's work is referenced heavily in a recent piece by Graefe, Armstrong, Jones, and Cuzan in the International Journal of Forecasting (30 (2014), 43-54). Graefe et al. remind us of the conditions under which combining is most valuable, and illustrate with an application to election forecasting. Since I am not fond of politics or politicians, we'll skip the elections part and look instead at the conditions where combining can help:

  • "Combining is applicable to many estimation and forecasting problems. The only exception is when strong prior evidence exists that one method is best and the likelihood of bracketing is low" (p.44). ["Bracketing" occurs when one forecast was higher than the actual, and one was lower.] This suggests that combining forecasts should be our default method. We should only select one particular model when there is strong evidence it is best. However in most real-world forecasting situations, we cannot know in advance which forecast will be most accurate.
  • Combine forecasts from several methods. Armstrong recommended using at least five forecasts. These forecasts should be generated using methods that adhere to accepted forecasting procedures for the given situation. (That is, don't just make up a bunch of forecasts willy-nilly.)
  • "Combining forecasts is most valuable when the individual forecasts are diverse in the methods used and the theories and data upon which they are based" (p.45). Such forecasts are likely to include different biases and random errors -- that we expect would help cancel each other out.
  • The larger the difference in the underlying theories or methods of component forecasts, the greater the extent and probability of error reduction through combining.
  • Weight the forecasts equally when you combine them. "A large body of analytical and empirical evidence supports the use of equal weights" (p.46). There is no guarantee that equal weights will produce the best results, but they are simple to apply, easy to explain, and a fancier weighting scheme is probably not worth the effort.
  • "While combining is useful under all conditions, it is especially valuable in situations involving high levels of uncertainty" (p.51).

So forget about achieving sainthood the hard way. (If burning a caricature of Winston Churchill into a grilled cheese sandwich were easy, I'd be Pope by now.) Instead, deliver a miracle to your organization the easy way -- by combining forecasts.

[For further discussion of combining forecasts in SAS forecasting software, see the 2012 SAS Global Forum paper "Combined Forecasts: What to Do When One Model Isn't Good Enough" by my colleagues Ed Blair, Michael Leonard, and Bruce Elsheimer.]


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.

1 Comment

  1. Rick Wicklin

    In the mathematics of differential equations, the act of perturbing the initial conditions is known as "sensitivity analysis." A robust model is one that gives similar forecasts for each perturbed parameter. A model that has widely divergent forecasts is said to have "sensitive dependence on initial conditions," which is a fancy way of saying "actual results may vary"! Models of weather systems are inherently unstable over long time periods, which is why forecasters rarely predict weather more than a few days in advance. And models of precipitation are more sensitive than models of temperature.
